# Low Parameter Count
## Lite Whisper Large V3 Turbo Fast

Lite-Whisper is a compressed version of OpenAI Whisper that uses LiteASR to significantly reduce model size while maintaining high accuracy.

- License: Apache-2.0
- Tags: Speech Recognition, Transformers
- Author: efficient-speech · Downloads: 99 · Likes: 2
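LiteASR compresses Whisper's linear layers with low-rank approximation. A minimal sketch of that idea using a truncated SVD on a single weight matrix is below; the matrix size and kept rank are illustrative assumptions, not the method's actual settings (LiteASR derives its ranks from activation statistics rather than the weights alone):

```python
import numpy as np

# Low-rank compression of one linear layer's weight matrix via truncated SVD.
# Shapes and rank are illustrative assumptions, not LiteASR's real settings.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))  # original d_out x d_in weight

U, S, Vt = np.linalg.svd(W, full_matrices=False)
r = 64                      # kept rank (assumption)
A = U[:, :r] * S[:r]        # d_out x r factor
B = Vt[:r, :]               # r x d_in factor

# One 512x512 matmul becomes two skinny matmuls: y = A @ (B @ x)
x = rng.standard_normal(512)
y_lr = A @ (B @ x)

orig_params = W.size
lr_params = A.size + B.size
print(lr_params / orig_params)  # 0.25: 4x fewer parameters for this layer
```

The win is that the factored form both shrinks storage and replaces one large matmul with two much smaller ones, which is where the speedup in the "Fast" variant comes from.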
## Lite Whisper Large V3

Lite-Whisper is a compressed version of OpenAI Whisper that uses LiteASR to reduce model size while maintaining high accuracy.

- License: Apache-2.0
- Tags: Speech Recognition, Transformers
- Author: efficient-speech · Downloads: 70 · Likes: 2
## Ruri Reranker Stage1 Small

The Ruri Reranker is a general-purpose Japanese reranking model designed to improve the relevance ranking of Japanese text retrieval results. The small version maintains high performance with a smaller parameter count.

- License: Apache-2.0
- Tags: Text Embedding, Japanese
- Author: cl-nagoya · Downloads: 25 · Likes: 0
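A reranker takes an initial retrieval list and re-orders it with a finer-grained relevance score for each (query, document) pair. The sketch below shows only that control flow; `score` is a hypothetical token-overlap stand-in, whereas the real Ruri model scores pairs with a Japanese language model:

```python
# Rerank retrieved documents by a per-pair relevance score.
# `score` is a hypothetical stand-in; the real Ruri reranker scores
# each (query, document) pair with a Japanese cross-encoder.
def score(query: str, doc: str) -> float:
    # toy proxy: fraction of query tokens that appear in the document
    q, d = set(query.split()), set(doc.split())
    return len(q & d) / max(len(q), 1)

def rerank(query: str, docs: list[str]) -> list[str]:
    return sorted(docs, key=lambda d: score(query, d), reverse=True)

docs = ["cats sleep a lot", "dogs bark loudly", "cats and dogs play"]
print(rerank("do cats play", docs))  # most relevant document first
```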
## Lamini Prompt Enchance

This model is fine-tuned from MBZUAI/LaMini-Flan-T5-248M for prompt enhancement tasks, primarily text description enhancement.

- Tags: Text Generation, Transformers
- Author: gokaygokay · Downloads: 930 · Likes: 4
## Drug Ollama V3 2

This large language model was trained with H2O LLM Studio on top of open_llama_3b and specializes in text generation for the pharmaceutical domain.

- Tags: Large Language Model, Transformers, English
- Author: Ketak-ZoomRx · Downloads: 99 · Likes: 3
## Deeplabv3 Mobilevit Xx Small

A lightweight semantic segmentation model pre-trained on the PASCAL VOC dataset, combining the MobileViT and DeepLabV3 architectures.

- License: Other
- Tags: Image Segmentation, Transformers
- Author: apple · Downloads: 1,571 · Likes: 10
## Deeplabv3 Mobilevit X Small

A lightweight vision Transformer model combining MobileNetV2 blocks and Transformer modules, suitable for semantic segmentation on mobile devices.

- License: Other
- Tags: Image Segmentation, Transformers
- Author: apple · Downloads: 268 · Likes: 3
## Mobilevit Xx Small

MobileViT is a lightweight, low-latency vision Transformer that combines the strengths of CNNs and Transformers, making it suitable for mobile devices.

- License: Other
- Tags: Image Classification, Transformers
- Author: apple · Downloads: 6,077 · Likes: 16
## Mobilevit X Small

MobileViT is a lightweight, low-latency vision Transformer that combines the advantages of CNNs and Transformers, making it suitable for mobile devices.

- License: Other
- Tags: Image Classification, Transformers
- Author: apple · Downloads: 1,062 · Likes: 6
## Bert Base Uncased Squadv1 X1.96 F88.3 D27 Hybrid Filled Opt V1

A question-answering model based on BERT-base uncased, fine-tuned on SQuAD v1 and optimized with pruning: 43% of the original weights are retained, yielding 1.96x faster inference.

- License: MIT
- Tags: Question Answering, Transformers, English
- Author: madlag · Downloads: 20 · Likes: 0
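The "43% of weights retained" figure above comes from pruning. A minimal sketch of simple magnitude pruning at that density is below; this is a simplification for illustration only, since this model family uses hybrid block (structured) pruning so the removed weights actually translate into faster dense matmuls:

```python
import numpy as np

# Magnitude pruning: zero out the smallest-magnitude weights,
# keeping a target density (43%, matching the model card above).
# Simplified sketch; the real model uses hybrid/block pruning.
rng = np.random.default_rng(0)
W = rng.standard_normal((768, 768))
density = 0.43

k = int(W.size * density)                    # number of weights to keep
thresh = np.sort(np.abs(W), axis=None)[-k]   # magnitude of the k-th largest weight
mask = np.abs(W) >= thresh
W_pruned = W * mask

print(f"kept {mask.mean():.2%} of weights")
```

Unstructured sparsity like this mainly saves memory; the structured variant prunes whole blocks so standard dense kernels can skip them, which is what produces the advertised wall-clock speedup.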